According to The Information, Anthropic is working on its next flagship model, Claude Opus 4.7, as well as a new AI-powered tool for creating websites and presentations. Both products could reportedly be released as early as this week.

The design tool is said to let both technical and non-technical users build presentations, websites, and landing pages using natural language prompts. That would place it in direct competition with Adobe, Figma, and Wix, as well as newer AI-focused products such as Gamma and Google Stitch. After reports of Anthropic’s plans emerged, shares of Adobe, Wix, and Figma reportedly fell by more than two percent.

Opus 4.7, however, is not expected to be Anthropic’s most powerful model. That role reportedly belongs to Claude Mythos, which is currently being tested by selected partners for vulnerability and security research.

At the same time, investor demand for Anthropic is accelerating. Business Insider reports that the company has recently received multiple offers from venture capital firms valuing it at up to $800 billion. That would be more than double the $380 billion valuation Anthropic reached in its February funding round. On secondary marketplace Caplight, Anthropic is already trading at a $688 billion valuation, up 75 percent over the past three months. For comparison, OpenAI was most recently valued at $852 billion.

The enthusiasm is being driven by Anthropic’s rapid growth. The company’s annualized revenue has reportedly climbed to $30 billion, up from $9 billion at the end of 2025. More than 1,000 enterprise customers are now spending over $1 million per year, a figure said to have doubled in less than two months.

Amid the surge in demand, Anthropic has also overhauled its Claude Enterprise pricing model. Instead of charging a flat fee of up to $200 per user per month, the company now reportedly charges enterprise customers a $20 base fee plus usage-based costs tied to compute consumption. For heavy users, that could double or even triple total costs. The shift appears to reflect rising inference expenses from compute-intensive agents such as Claude Code and Claude Cowork, as well as increasing pressure on available compute capacity.
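To see how a base-plus-usage model can double or triple a heavy user’s bill relative to the old flat fee, here is a minimal arithmetic sketch. Only the $200 flat fee and $20 base fee come from the report; the per-unit rate and the usage levels are hypothetical, chosen purely to illustrate how the two schemes diverge.

```python
# Hypothetical illustration of the reported pricing change, NOT Anthropic's
# actual billing logic. Only the $200 flat fee and $20 base fee are reported;
# the rate and usage figures below are assumed.

FLAT_FEE = 200.0  # reported old flat fee, $/user/month
BASE_FEE = 20.0   # reported new base fee, $/user/month

def usage_based_cost(compute_units: float, rate_per_unit: float) -> float:
    """New scheme: base fee plus a charge tied to compute consumption."""
    return BASE_FEE + compute_units * rate_per_unit

RATE = 0.02  # $ per compute unit (hypothetical)

light_user = usage_based_cost(2_000, RATE)   # 20 + 40  = $60
heavy_user = usage_based_cost(25_000, RATE)  # 20 + 500 = $520

print(f"old flat fee: ${FLAT_FEE:.0f}")
print(f"light user:   ${light_user:.0f}")  # well below the old flat fee
print(f"heavy user:   ${heavy_user:.0f}")  # more than double the old flat fee
```

Under these assumed numbers, a light user pays far less than before, while a heavy user pays more than double the old flat rate, which matches the dynamic the report describes.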